Recent work has shown that fine-tuning large pre-trained language models on a collection of tasks described via instructions, a.k.a. instruction-tuning, improves their zero- and few-shot generalization to unseen tasks. However, there is a limited understanding of the performance trade-offs of different decisions made during the instruction-tuning process. These decisions include the scale and diversity of the instruction-tuning benchmark, different task sampling strategies, fine-tuning with and without demonstrations, training with specialized datasets for reasoning and dialogue, and finally, the fine-tuning objectives themselves. In this paper, we characterize the effect of instruction-tuning decisions on downstream task performance when scaling both model and benchmark sizes. To this end, we create OPT-IML Bench: a large benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, and prepare an evaluation framework to measure three types of model generalization: to tasks from fully held-out categories, to held-out tasks from seen categories, and to held-out instances from seen tasks. Through the lens of this framework, we first present insights about instruction-tuning decisions as applied to OPT-30B, and further exploit these insights to train OPT-IML 30B and 175B, instruction-tuned versions of OPT. OPT-IML demonstrates all three generalization abilities at both scales on four evaluation benchmarks with diverse tasks and input formats: PromptSource, FLAN, Super-NaturalInstructions, and UnifiedSKG. Not only does it significantly outperform OPT on all benchmarks, but it is also highly competitive with existing models fine-tuned on each specific benchmark. We release OPT-IML at both scales, together with the OPT-IML Bench evaluation framework.
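As a concrete illustration of the fine-tuning objective at work here, the sketch below formats one instruction-task instance as a prompt and computes the causal-LM loss with Hugging Face Transformers. The prompt template, the choice to mask the loss on prompt tokens, and the small OPT checkpoint are illustrative assumptions, not the paper's exact recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")  # small stand-in for OPT-30B/175B
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

def format_example(instruction, inp, target):
    # Serialize one task instance as an instruction-following prompt (assumed template).
    prompt = f"{instruction}\n\nInput: {inp}\nOutput:"
    enc = tokenizer(prompt + " " + target, return_tensors="pt",
                    truncation=True, max_length=512)
    labels = enc["input_ids"].clone()
    # Compute the loss only on the target tokens (one common choice;
    # training on the full sequence is another).
    prompt_len = len(tokenizer(prompt)["input_ids"])
    labels[:, :prompt_len] = -100  # -100 is ignored by the LM loss
    return enc, labels

enc, labels = format_example(
    "Classify the sentiment of the sentence as positive or negative.",
    "The movie was a delight from start to finish.",
    "positive",
)
loss = model(**enc, labels=labels).loss  # standard next-token prediction objective
loss.backward()
```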
This paper focuses on the broadcast of information on robot networks with stochastic network interconnection topologies. Problematic communication networks are almost unavoidable in areas where we wish to deploy multi-robot systems, usually due to a lack of environmental consistency, accessibility, and structure. We tackle this problem by modeling the broadcast of information in a multi-robot communication network as a stochastic process with random arrival times, which can be produced by irregular robot movements, wireless attenuation, and other environmental factors. Using this model, we provide and analyze a receding horizon control strategy to control the statistics of the information broadcast. The resulting strategy compels the robots to redirect their communication resources to different neighbors according to the current propagation process in order to fulfill global broadcast requirements. Building on this method, we provide an approach to compute the expected time to broadcast the message to all nodes. Numerical examples are provided to illustrate the results.
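To make the "expected time to broadcast" quantity concrete, here is a toy Monte Carlo sketch that estimates it for a message spreading over a network with random delivery times. The ring topology and i.i.d. exponential arrival times are assumptions chosen for illustration; the paper's model and receding horizon controller are not implemented here.

```python
import random

def broadcast_time(n, rate, trials=2000):
    # Estimate E[time until all n nodes are informed] by simulation.
    times = []
    for _ in range(trials):
        informed = {0}  # the source robot holds the message at t = 0
        t = 0.0
        while len(informed) < n:
            # Active links: informed node -> uninformed neighbor on a ring.
            active = [(u, v) for u in informed
                      for v in ((u - 1) % n, (u + 1) % n) if v not in informed]
            # The minimum of k i.i.d. Exp(rate) delivery clocks is Exp(k * rate).
            t += random.expovariate(rate * len(active))
            informed.add(random.choice(active)[1])  # the firing link is uniform
        times.append(t)
    return sum(times) / trials

print(f"Estimated expected broadcast time (n=10, rate=1): {broadcast_time(10, 1.0):.2f}")
```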
Determining the causal effects of temporal multi-interventions assists decision-making. Hampered by time-varying bias, selection bias, and interactions among multiple interventions, the disentanglement and estimation of multiple treatment effects from individual temporal data remain largely unexplored. To tackle these challenges, we propose a comprehensive framework for temporal counterfactual forecasting from an individual multiple-treatment perspective (TCFimt). TCFimt constructs adversarial tasks in a seq2seq framework to alleviate selection and time-varying bias, and designs a contrastive learning-based block to decouple a mixed treatment effect into separated main treatment effects and causal interactions, which further improves estimation accuracy. In experiments on two real-world datasets from distinct fields, the proposed method performs favorably against state-of-the-art methods in predicting future outcomes under specific treatments and in choosing the optimal treatment type and timing.
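A schematic PyTorch sketch of the decoupling idea may help: the mixed effect of simultaneous treatments is modeled as separated main-effect heads plus an interaction head. All layer shapes, the additive composition, and the gating on multiple active treatments are assumptions for illustration, and the adversarial and contrastive components are omitted.

```python
import torch
import torch.nn as nn

class TreatmentEffectHead(nn.Module):
    def __init__(self, hidden=64, n_treatments=2):
        super().__init__()
        # One head per treatment for its separated main effect.
        self.main = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_treatments)])
        # A head for the causal interaction among treatments.
        self.interaction = nn.Linear(hidden, 1)

    def forward(self, h, treat_mask):
        # h: (batch, hidden) history representation from a seq2seq encoder.
        # treat_mask: (batch, n_treatments) binary indicators of applied treatments.
        mains = torch.cat([head(h) for head in self.main], dim=-1)  # (batch, T)
        main_effect = (mains * treat_mask).sum(-1, keepdim=True)
        # The interaction term contributes only when several treatments are active.
        multi = (treat_mask.sum(-1, keepdim=True) > 1).float()
        return main_effect + multi * self.interaction(h)

head = TreatmentEffectHead()
h = torch.randn(4, 64)
mask = torch.tensor([[1., 0.], [0., 1.], [1., 1.], [0., 0.]])
print(head(h, mask).shape)  # torch.Size([4, 1])
```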
In this paper, we study a novel and widely existing problem in graph matching (GM), namely, Bi-level Noisy Correspondence (BNC), which comprises node-level noisy correspondence (NNC) and edge-level noisy correspondence (ENC). In brief, on the one hand, due to poor recognizability and viewpoint differences between images, some keypoints are inevitably annotated inaccurately, with offsets and confusion, leading to mismatches between associated nodes, i.e., NNC. On the other hand, the noisy node-to-node correspondence further contaminates the edge-to-edge correspondence, leading to ENC. To address the BNC challenge, we propose a novel method termed Contrastive Matching with Momentum Distillation. Specifically, the proposed method is equipped with a robust quadratic contrastive loss that enjoys the following merits: i) it better explores node-to-node and edge-to-edge correlations through a quadratic contrastive learning paradigm customized for GM; ii) it adaptively penalizes noisy assignments based on the confidence estimated by a momentum teacher. Extensive experiments on three real-world datasets show the robustness of our model compared with 12 competitive baselines.
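The sketch below illustrates the two named ingredients under stated assumptions: an EMA momentum teacher and a contrastive loss whose per-pair weights come from teacher confidence. The linear encoder, the InfoNCE form, and the confidence estimate are placeholders; the paper's quadratic (edge-aware) formulation is not reproduced.

```python
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, m=0.999):
    # Teacher weights track an exponential moving average of the student's.
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.data.mul_(m).add_(ps.data, alpha=1 - m)

def weighted_infonce(z1, z2, conf, tau=0.1):
    # z1, z2: (N, d) embeddings of putatively matched nodes; conf: (N,) in [0, 1].
    logits = F.normalize(z1, dim=1) @ F.normalize(z2, dim=1).t() / tau
    per_pair = F.cross_entropy(logits, torch.arange(len(z1)), reduction="none")
    # Down-weight pairs the momentum teacher deems likely mismatched (NNC).
    return (conf * per_pair).mean()

student = torch.nn.Linear(128, 64)
teacher = copy.deepcopy(student)  # momentum teacher, updated only by EMA
x1, x2 = torch.randn(32, 128), torch.randn(32, 128)
with torch.no_grad():
    conf = torch.sigmoid((teacher(x1) * teacher(x2)).sum(-1))  # assumed confidence
loss = weighted_infonce(student(x1), student(x2), conf)
loss.backward()
ema_update(teacher, student)
```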
Objective: We aim to develop an open-source natural language processing (NLP) package, SODA (i.e., SOcial DeterminAnts), with pre-trained transformer models to extract social determinants of health (SDoH) for cancer patients, examine the generalizability of SODA to a new disease domain (i.e., opioid use), and evaluate the extraction rates of SDoH in cancer populations. Methods: We identified SDoH categories and attributes and developed an SDoH corpus using clinical notes from a general cancer cohort. We compared four transformer-based NLP models for extracting SDoH, examined the generalizability of the NLP models to a cohort of patients prescribed opioids, and explored customization strategies to improve performance. We applied the best NLP model to extract 19 categories of SDoH from the breast (n=7,971), lung (n=11,804), and colorectal cancer (n=6,240) cohorts. Results and Conclusion: We developed a corpus of 629 clinical notes from cancer patients with annotations of 13,193 SDoH concepts/attributes across 19 categories of SDoH. The Bidirectional Encoder Representations from Transformers (BERT) model achieved the best strict/lenient F1 scores: 0.9216 and 0.9441 for SDoH concept extraction, and 0.9617 and 0.9626 for linking attributes to SDoH concepts. Fine-tuning the NLP models on new annotations from patients prescribed opioids improved the strict/lenient F1 scores from 0.8172/0.8502 to 0.8312/0.8679. The extraction rates varied greatly across the 19 categories of SDoH: 10 SDoH could be extracted from >70% of cancer patients, while 9 had a low extraction rate (<70% of cancer patients). The SODA package with pre-trained transformer models is publicly available at https://github.com/uf-hobiinformatics-lab/SDoH_SODA.
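For readers unfamiliar with the extraction step, the following sketch shows what transformer-based SDoH concept extraction looks like as token classification in Hugging Face Transformers. The label set, checkpoint, and example note are invented for illustration; the released SODA models at the GitHub URL above may expose a different interface.

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

labels = ["O", "B-Employment", "I-Employment", "B-TobaccoUse", "I-TobaccoUse"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels)  # untrained head: illustration only
)

note = "Patient is a retired teacher and reports smoking one pack per day."
enc = tokenizer(note, return_tensors="pt")
with torch.no_grad():
    pred = model(**enc).logits.argmax(-1)[0]  # one label id per wordpiece
for tok, p in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), pred):
    print(f"{tok:15s} {labels[int(p)]}")
```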
We introduce a lightweight network that improves the descriptors of keypoints within the same image. The network takes the original descriptors and the geometric properties of the keypoints as input, and uses an MLP-based self-boosting stage and a Transformer-based cross-boosting stage to enhance the descriptors. The enhanced descriptors can be either real-valued or binary. We use the proposed network to boost both hand-crafted descriptors (ORB, SIFT) and state-of-the-art learned descriptors (SuperPoint, ALIKE), and evaluate them on image matching, visual localization, and structure-from-motion tasks. The results show that our method significantly improves performance on each task, particularly in challenging cases such as large illumination changes or repetitive patterns. Our method requires only 3.2 ms on a desktop GPU and 27 ms on an embedded GPU to process 2000 features, which is fast enough for practical systems.
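A structural sketch of the described pipeline, under assumptions: each descriptor is first fused with its keypoint geometry by an MLP (self-boosting), then all descriptors in the image exchange context through a Transformer encoder (cross-boosting). Dimensions, depths, and the geometry encoding are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DescriptorBooster(nn.Module):
    def __init__(self, desc_dim=256, geo_dim=4):
        super().__init__()
        # Self-boosting: fuse each descriptor with its keypoint geometry
        # (e.g., x, y, scale, orientation), independently of other keypoints.
        self.self_boost = nn.Sequential(
            nn.Linear(desc_dim + geo_dim, desc_dim), nn.ReLU(),
            nn.Linear(desc_dim, desc_dim),
        )
        # Cross-boosting: let all keypoints in the image attend to each other.
        layer = nn.TransformerEncoderLayer(d_model=desc_dim, nhead=8, batch_first=True)
        self.cross_boost = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, desc, geo):
        # desc: (B, N, desc_dim) original descriptors; geo: (B, N, geo_dim).
        h = self.self_boost(torch.cat([desc, geo], dim=-1))
        return self.cross_boost(h)  # enhanced descriptors, same shape as desc

booster = DescriptorBooster()
out = booster(torch.randn(1, 2000, 256), torch.randn(1, 2000, 4))
print(out.shape)  # torch.Size([1, 2000, 256])
```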
Sketches form an intuitive and powerful visual expression, as they are quick to execute as freehand drawings. We present a method for synthesizing realistic photos from scene sketches. Without the need for sketch-photo pairs, our framework learns directly from readily available large-scale photo datasets in an unsupervised manner. To this end, we introduce a standardization module that provides pseudo sketch-photo pairs during training by converting both photos and sketches into a standardized domain, i.e., edge maps. The reduced domain gap between sketches and photos also allows us to disentangle them into two components: holistic scene structure and low-level visual style, such as color and texture. Exploiting this advantage, we synthesize photo-realistic images by combining the structure of a sketch with the visual style of a reference photo. Extensive experimental results on perceptual similarity metrics and human perception studies show that the method can generate realistic photos from scene sketches and outperforms state-of-the-art photo-synthesis baselines. We also demonstrate that our framework facilitates controllable manipulation of the synthesized photos by editing the strokes of the corresponding sketch, delivering more fine-grained detail than previous approaches that rely on region-level editing.
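A minimal sketch of the standardization module's role, assuming a classical edge detector: both photos and sketches are mapped into a shared edge-map domain, so any unpaired photo yields a pseudo sketch-photo training pair. The paper may standardize differently; Canny and its thresholds here are stand-ins.

```python
import cv2
import numpy as np

def to_standard_domain(image_bgr):
    # Map an image (photo or rendered sketch) into the shared edge-map domain.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Canny(gray, threshold1=100, threshold2=200)  # arbitrary thresholds

photo = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)  # stand-in image
edge_map = to_standard_domain(photo)
# (edge_map, photo) now forms one pseudo sketch-photo training pair.
print(edge_map.shape, edge_map.dtype)
```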
We present FOLIO, a human-annotated, open-domain, and logically complex and diverse dataset for natural language (NL) reasoning, equipped with first-order logic (FOL) annotations. FOLIO consists of 1,435 examples (unique conclusions), each paired with one of 487 sets of premises that serve as rules for deductively reasoning about the validity of each conclusion. The logical correctness of the premises and conclusions is ensured by their parallel FOL annotations, which are automatically verified by our FOL inference engine. In addition to the main NL reasoning task, the NL-FOL pairs in FOLIO automatically constitute a new NL-FOL translation dataset that uses FOL as the logical form. We systematically evaluate the FOL reasoning capability of medium-sized language models (BERT, RoBERTa) fine-tuned on FOLIO, and of large language models (GPT-NeoX, OPT, GPT-3, Codex) prompted with few-shot examples. For NL-FOL translation, we experiment with GPT-3 and Codex. Our results show that one of the most capable publicly available large language models (LLMs), GPT-3 davinci, performs only slightly better than random on a subset of FOLIO, and is especially poor at predicting the correct truth values of False and Unknown conclusions. Our dataset and code are available at https://github.com/yale-lily/folio.
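To make the task format concrete, here is a hedged sketch of one FOLIO-style item and how it might be serialized into a few-shot prompt for an LLM. The premises, FOL strings, and prompt wording are invented for illustration; they are not actual FOLIO examples.

```python
# One FOLIO-style item (assumed schema): NL premises and conclusion with
# parallel FOL annotations; the label is what a FOL engine would verify.
example = {
    "premises": ["All dogs are mammals.", "Rex is a dog."],
    "conclusion": "Rex is a mammal.",
    "label": "True",
    "premises_fol": ["∀x (Dog(x) → Mammal(x))", "Dog(rex)"],
    "conclusion_fol": "Mammal(rex)",
}

def make_prompt(ex):
    # Serialize the item for few-shot prompting of a large language model.
    lines = ["Premises:"] + [f"- {p}" for p in ex["premises"]]
    lines += [f"Conclusion: {ex['conclusion']}",
              "Is the conclusion True, False, or Unknown?"]
    return "\n".join(lines)

print(make_prompt(example))
```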
Modern machine learning systems achieve great success when trained on large datasets. However, these datasets usually contain sensitive information (e.g., medical records, face images), leading to serious privacy concerns. Differentially private generative models (DPGMs) circumvent such privacy issues by generating privatized sensitive data. Similar to other differentially private (DP) learners, the major challenge for DPGMs is likewise how to strike a subtle balance between utility and privacy. We propose DP²-VAE, a novel training mechanism for variational autoencoders (VAEs) with provable DP guarantees and improved utility via pre-training on informative public data. Under the same DP constraints, DP²-VAE minimizes the perturbation noise during training and thereby improves utility. DP²-VAE is highly flexible and easily amenable to many other VAE variants. Theoretically, we study the effect of pre-training on private data. Empirically, we conduct extensive experiments on image datasets to illustrate our superiority over baselines under various privacy budgets and evaluation metrics.
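As a rough illustration of the private phase, the sketch below implements a bare-bones DP-SGD step (per-example gradient clipping plus Gaussian noise) applied after an assumed non-private pre-training phase on public data. The clip norm, noise scale, and toy model are placeholders; DP²-VAE's specific mechanism and privacy accounting are not reproduced.

```python
import torch

def dp_sgd_step(model, loss_per_example, lr=1e-3, clip=1.0, sigma=1.0):
    grads = None
    for loss in loss_per_example:  # per-example gradients (slow but explicit)
        g = torch.autograd.grad(loss, list(model.parameters()), retain_graph=True)
        norm = torch.sqrt(sum((gi ** 2).sum() for gi in g))
        scale = torch.clamp(clip / (norm + 1e-12), max=1.0)  # clip to L2 norm `clip`
        g = [gi * scale for gi in g]
        grads = g if grads is None else [a + b for a, b in zip(grads, g)]
    n = len(loss_per_example)
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            noise = sigma * clip * torch.randn_like(g)  # Gaussian mechanism
            p -= lr * (g + noise) / n

model = torch.nn.Linear(8, 8)  # stand-in for a VAE encoder/decoder
# model.load_state_dict(...)   # assumed: pre-trained non-privately on public data
x = torch.randn(16, 8)
losses = ((model(x) - x) ** 2).mean(dim=1)  # toy per-example reconstruction loss
dp_sgd_step(model, losses)
```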
This paper proposes a novel Unified Feature Optimization (UFO) paradigm for training and deploying deep models under real-world, large-scale scenarios that require a collection of multiple AI functions. UFO aims to benefit every task through large-scale pre-training on all tasks. Compared with the well-known foundation models, UFO has two different points of emphasis, i.e., a relatively smaller model size and no adaptation cost: 1) UFO squeezes a wide range of tasks into a moderate-sized unified model in a multi-task learning manner, and further trims the model size when transferring to downstream tasks. 2) UFO does not emphasize transfer to novel tasks; instead, it aims to make the trimmed model dedicated to one or more already-seen tasks. With these two characteristics, UFO provides great convenience for flexible deployment while maintaining the benefits of large-scale pre-training. A key merit of UFO is that the trimming process not only reduces the model size and inference cost but can even improve accuracy on certain tasks. Specifically, UFO considers multi-task training, which has a two-fold impact on the unified model: some closely related tasks benefit each other, while some tasks conflict with one another. UFO manages to reduce the conflicts and preserve the mutual benefits through a novel network architecture search (NAS) method. Experiments on various deep representation learning tasks (i.e., face recognition, person re-identification, vehicle re-identification, and product retrieval) show that the model trimmed from UFO achieves higher accuracy than its single-task-trained counterpart while having a smaller model size, validating the concept of UFO. In addition, UFO also supported the release of a 17-billion-parameter computer vision (CV) foundation model, the largest CV model in the industry.
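A toy sketch of the deployment story, under assumptions: one shared backbone serves many task heads during multi-task training, and "trimming" here simply drops the heads a deployment does not need. The real UFO trims via a NAS procedure that this sketch does not implement.

```python
import torch
import torch.nn as nn

class UnifiedModel(nn.Module):
    def __init__(self, tasks, feat_dim=512):
        super().__init__()
        # Shared backbone trained jointly on all tasks (stand-in for the real one).
        self.backbone = nn.Sequential(nn.Linear(2048, feat_dim), nn.ReLU())
        self.heads = nn.ModuleDict({t: nn.Linear(feat_dim, 128) for t in tasks})

    def forward(self, x, task):
        return self.heads[task](self.backbone(x))

    def trim_to(self, keep_tasks):
        # Specialize the unified model to one or more already-seen tasks.
        self.heads = nn.ModuleDict({t: self.heads[t] for t in keep_tasks})
        return self

model = UnifiedModel(["face", "person_reid", "vehicle_reid", "product"])
deployed = model.trim_to(["face"])  # smaller model for a single deployment
print(deployed(torch.randn(1, 2048), "face").shape)  # torch.Size([1, 128])
```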